Enhanced Chinese Tokenization

# Enhanced Chinese Tokenization

Ziya LLaMA 13B Pretrain V1
License: GPL-3.0
A 13-billion-parameter pre-trained model based on the LLaMA architecture, optimized for Chinese tokenization. It completed 110 billion tokens of incremental pre-training on Chinese and English data, significantly improving its Chinese generation and comprehension capabilities.
Tags: Large Language Model, Transformers, Supports Multiple Languages
Publisher: IDEA-CCNL
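
Below is a minimal sketch of loading and querying the model with Hugging Face Transformers. The repository id `IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1` is an assumption based on the publisher and model name; check IDEA-CCNL's model page for the exact id. Note that Ziya LLaMA weights were originally distributed as delta weights that must be merged with the base LLaMA weights before use.

```python
# Hedged sketch: loading Ziya LLaMA 13B Pretrain V1 with Transformers.
# Repo id is an assumption; verify on the publisher's page. The original
# release may ship delta weights that require merging with base LLaMA first.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 13B parameters: fp16 halves memory use
    device_map="auto",          # spread layers across available devices
)

# A Chinese prompt exercises the enhanced Chinese tokenization.
prompt = "请介绍一下北京的历史。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```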